
    Efficient Adaptive Sobel and Joint Significance Tests for Mediation Effects

    Mediation analysis is an important statistical tool in many research fields, whose aim is to investigate the mechanism along the causal pathway between an exposure and an outcome. The Sobel test and the joint significance test are two popular methods for testing mediation effects in practice, but both suffer from a conservative type I error, which reduces their power and limits their usefulness; this limitation is long-standing in the literature. To fill this gap, we propose two novel data-adaptive tests for mediation effects, the adaptive Sobel test and the adaptive joint significance test, which offer significant improvements over the traditional Sobel and joint significance tests while remaining user-friendly and free of complicated procedures. We derive explicit expressions for size and power, which ensure the theoretical soundness of our method. Furthermore, we extend the proposed adaptive Sobel and adaptive joint significance tests to multiple mediators with family-wise error rate (FWER) control. Extensive simulations are conducted to evaluate the performance of our mediation testing procedure. Finally, we illustrate the usefulness of our method by analysing three real-world datasets with continuous, binary and time-to-event outcomes, respectively.
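    For orientation, here is a minimal Python sketch of the classical Sobel and joint significance tests that the adaptive variants build on; the adaptive tests themselves are the paper's contribution and are not reproduced here. The simulated data, variable names, and the normal reference distribution are illustrative assumptions.

```python
# Minimal sketch of the classical Sobel and joint significance (MaxP) tests
# for the mediation pathway x -> m -> y; all data below are simulated.
import numpy as np
from scipy import stats
import statsmodels.api as sm

def sobel_and_joint_tests(x, m, y):
    """Return (Sobel p-value, joint significance p-value) for H0: a*b = 0."""
    # Mediator model: m = intercept + a*x + error
    fit_m = sm.OLS(m, sm.add_constant(x)).fit()
    a, se_a = fit_m.params[1], fit_m.bse[1]
    # Outcome model: y = intercept + c'*x + b*m + error
    fit_y = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit()
    b, se_b = fit_y.params[2], fit_y.bse[2]
    # Sobel statistic with first-order (delta-method) standard error
    z = (a * b) / np.sqrt(a**2 * se_b**2 + b**2 * se_a**2)
    p_sobel = 2 * stats.norm.sf(abs(z))
    # Joint significance test: significant only if BOTH paths are significant,
    # i.e. the combined p-value is the maximum of the two path p-values
    p_a = 2 * stats.norm.sf(abs(a / se_a))
    p_b = 2 * stats.norm.sf(abs(b / se_b))
    return p_sobel, max(p_a, p_b)

rng = np.random.default_rng(0)
x = rng.normal(size=500)
m = 0.3 * x + rng.normal(size=500)            # true path a = 0.3
y = 0.3 * m + 0.1 * x + rng.normal(size=500)  # true path b = 0.3
print(sobel_and_joint_tests(x, m, y))
```

    The conservativeness discussed in the abstract arises because both reference distributions are poor approximations under the composite null (a = 0 or b = 0), which is what the adaptive tests are designed to correct.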

    A Framework for Mediation Analysis with Massive Data

    In recent years, mediation analysis has become increasingly popular in many research fields. Its aim is to investigate the direct effect of an exposure on an outcome together with the indirect effects along the pathways from exposure to outcome. A great number of articles have applied mediation analysis to data from hundreds or thousands of individuals, but with the rapid development of technology, the volume of available data increases exponentially, which brings new challenges: directly conducting statistical analysis on large datasets is often computationally infeasible, and there are very few results on mediation analysis with massive data. In this paper, we propose to use the subsampled double bootstrap as well as a divide-and-conquer algorithm to perform statistical mediation analysis on large-scale datasets. Extensive numerical simulations are conducted to evaluate the performance of our method, and two real data examples illustrate the usefulness of our approach in practical applications.
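    A minimal sketch of the divide-and-conquer idea applied to the two mediation regressions: fit each block separately and average the block estimates rather than fitting the full data at once. The subsampled double bootstrap is not reproduced, and the simulated data, block count, and helper names are illustrative assumptions.

```python
# Divide-and-conquer estimation of the mediation paths on a large dataset:
# split into blocks, estimate (a, b) per block, then average.
import numpy as np
import statsmodels.api as sm

def path_estimates(x, m, y):
    """Return (a_hat, b_hat) from the regressions m ~ x and y ~ x + m."""
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]
    b = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit().params[2]
    return a, b

def divide_and_conquer_ab(x, m, y, n_blocks=10):
    """Average per-block path estimates instead of one full-data fit."""
    blocks = np.array_split(np.arange(len(x)), n_blocks)
    est = np.array([path_estimates(x[i], m[i], y[i]) for i in blocks])
    return est.mean(axis=0)  # (a_bar, b_bar)

rng = np.random.default_rng(1)
n = 1_000_000
x = rng.normal(size=n)
m = 0.2 * x + rng.normal(size=n)              # true a = 0.2
y = 0.4 * m + 0.1 * x + rng.normal(size=n)    # true b = 0.4
a_bar, b_bar = divide_and_conquer_ab(x, m, y)
print(a_bar * b_bar)  # estimate of the indirect effect, about 0.08
```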

    miR-181a increases FoxO1 acetylation and promotes granulosa cell apoptosis via SIRT1 downregulation.

    Oxidative stress impairs follicular development by inducing granulosa cell (GC) apoptosis, which involves enhancement of the transcriptional activity of the pro-apoptotic factor Forkhead box O1 (FoxO1). However, the mechanism by which oxidative stress promotes FoxO1 activity is still unclear. Here, we found that miR-181a was upregulated in hydrogen peroxide (H₂O₂) […]

    Approximating Partial Likelihood Estimators via Optimal Subsampling

    With the growing availability of large-scale biomedical data, it is often time-consuming or infeasible to directly perform traditional statistical analysis with relatively limited computing resources. We propose a fast and stable subsampling method that effectively approximates the full-data maximum partial likelihood estimator in Cox's model, reducing the computational burden of analyzing massive survival data. We establish consistency and asymptotic normality of a general subsample-based estimator. The optimal subsampling probabilities, with explicit expressions, are determined by minimizing the trace of the asymptotic variance-covariance matrix of a linearly transformed parameter estimator. We propose a two-step subsampling algorithm for practical implementation, which achieves a significant reduction in computing time compared to the full-data method, and we establish the asymptotic properties of the resulting two-step subsample-based estimator. In addition, a subsampling-based Breslow-type estimator for the cumulative baseline hazard function and a subsample estimated survival function are presented. Extensive experiments are conducted to assess the proposed subsampling strategy. Finally, we provide an illustrative example analyzing a large-scale lymphoma cancer dataset from the Surveillance, Epidemiology, and End Results (SEER) Program.
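    The sketch below illustrates only the two-step skeleton (a uniform pilot subsample, then a non-uniform subsample fitted with inverse-probability weights), assuming the lifelines package for the Cox fits. The paper's explicit optimal-probability expressions are not reproduced; the pilot-based importance measure here is a crude stand-in, and all data and subsample sizes are illustrative.

```python
# Two-step subsampling skeleton for Cox regression on simulated data.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

def two_step_subsample_cox(df, duration_col, event_col, r0=500, r=2000, seed=0):
    rng = np.random.default_rng(seed)
    n = len(df)
    covs = df.drop(columns=[duration_col, event_col]).to_numpy()

    # Step 1: uniform pilot subsample gives a rough (pilot) estimator.
    pilot_idx = rng.choice(n, size=r0, replace=False)
    pilot = CoxPHFitter().fit(df.iloc[pilot_idx], duration_col, event_col)

    # Step 2: non-uniform subsample. Stand-in importance measure based on
    # the pilot fit's linear predictor (NOT the paper's optimal formula),
    # kept bounded away from zero.
    lp = np.abs(covs @ pilot.params_.to_numpy())
    p = (lp + lp.mean())
    p = p / p.sum()
    idx = rng.choice(n, size=r, replace=True, p=p)
    sub = df.iloc[idx].reset_index(drop=True)
    sub["w"] = 1.0 / (r * p[idx])  # inverse-probability sampling weights
    return CoxPHFitter().fit(sub, duration_col, event_col,
                             weights_col="w", robust=True)

rng = np.random.default_rng(0)
n = 100_000
z = rng.normal(size=(n, 2))
t = rng.exponential(scale=np.exp(-(0.5 * z[:, 0] - 0.3 * z[:, 1])))
c = rng.exponential(scale=2.0, size=n)  # independent censoring times
df = pd.DataFrame({"time": np.minimum(t, c), "event": (t <= c).astype(int),
                   "z1": z[:, 0], "z2": z[:, 1]})
print(two_step_subsample_cox(df, "time", "event").params_)  # near (0.5, -0.3)
```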

    A Unified Complexity Metric for Nonconvex Matrix Completion and Matrix Sensing in the Rank-one Case

    In this work, we develop a new complexity metric for an important class of low-rank matrix optimization problems. The metric aims to quantify the complexity of the nonconvex optimization landscape of each problem and the success of local search methods in solving it. The existing literature has focused on two complexity measures: the RIP constant is commonly used to characterize the complexity of matrix sensing problems, while the sampling rate and the incoherence are used when analyzing matrix completion problems. The proposed complexity metric has the potential to unify these two notions and also applies to a much larger class of problems. To study the properties of this metric mathematically, we focus on the rank-one generalized matrix completion problem and illustrate the usefulness of the new metric from three aspects. First, we show that instances satisfying the RIP condition have small complexity. Similarly, if an instance obeys the Bernoulli sampling model, the complexity metric takes a small value with high probability. Moreover, for a one-parameter class of instances, the complexity metric behaves consistently with the first two scenarios. Furthermore, we establish theoretical results providing both sufficient and necessary conditions for the existence of spurious solutions in terms of the proposed metric; this contrasts with the RIP and incoherence notions, which fail to provide any necessary condition.
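    As a concrete instance of the nonconvex landscape being measured, here is a small numerical sketch of rank-one matrix completion under Bernoulli sampling, solved by gradient descent on the factored objective; the paper's complexity metric itself is not computed. Sizes, sampling rate, and step size are illustrative assumptions.

```python
# Rank-one matrix completion: recover M* = u* u*^T from Bernoulli-sampled
# entries by gradient descent on f(u) = ||P_Omega(u u^T - M*)||_F^2.
import numpy as np

rng = np.random.default_rng(0)
n, p_obs, lr = 50, 0.3, 0.005
u_star = rng.normal(size=n)
M = np.outer(u_star, u_star)               # ground-truth rank-one matrix
mask = rng.random((n, n)) < p_obs          # Bernoulli sampling pattern

u = rng.normal(size=n)                     # random initialization
for _ in range(3000):
    R = mask * (np.outer(u, u) - M)        # residual on observed entries only
    grad = 2 * (R + R.T) @ u               # gradient of the factored objective
    u -= lr * grad

# Up to a global sign flip, u should approximate u_star when the landscape
# is benign (no spurious local minima for this instance).
err = min(np.linalg.norm(u - u_star), np.linalg.norm(u + u_star))
print(err / np.linalg.norm(u_star))
```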

    Edge control in a computer controlled optical surfacing process using a heterocercal tool influence function

    The edge effect is regarded as one of the most difficult technical issues in a computer controlled optical surfacing (CCOS) process. Traditional opticians have to trade off the consequences of the following two cases: operating CCOS with a large overhang compromises the accuracy of material removal, while a small overhang gives more accurate removal but leaves a narrow rolled-up edge, which takes time and effort to remove. In order to control the edge residuals in the latter case, we present a new concept, the ‘heterocercal’ tool influence function (TIF). Generated by compound-motion equipment, this type of TIF can ‘transfer’ material removal from the inner area to the edge while maintaining the high accuracy and efficiency of CCOS. We call it the ‘heterocercal’ TIF because of the inspiration from the heterocercal tails of sharks, whose upper lobe provides most of the explosive power. The heterocercal TIF was theoretically analyzed and physically realized in CCOS facilities, and experimental and simulation results showed good agreement. It enables significant control of the edge effect and convergence of the overall surface error in large tool-to-mirror size-ratio conditions. This improvement will greatly help manufacturing efficiency in extremely large optical system projects, such as the tertiary mirror of the Thirty Meter Telescope.
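    For context, CCOS material removal is commonly modelled as the convolution of the TIF with a dwell-time map. The sketch below uses a generic Gaussian footprint as a stand-in (the heterocercal TIF's actual shape is the paper's contribution and is not reproduced) to show how the removal shortfall at the part edge arises; all dimensions are illustrative.

```python
# CCOS removal model: removal = TIF convolved with dwell time. Zero padding
# at the boundary mimics the footprint overhanging the part edge.
import numpy as np
from scipy.signal import fftconvolve

n = 256                                    # surface grid (arbitrary units)
yy, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]

# Stand-in TIF: rotationally symmetric Gaussian removal footprint.
sigma = 6.0
tif = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
tif /= tif.sum()                           # unit removal per unit dwell

dwell = np.ones((n, n))                    # uniform dwell time over the part
removal = fftconvolve(dwell, tif, mode="same")

# Predicted removal drops near the edge because part of the footprint falls
# off the part; this shortfall is the edge effect discussed in the abstract.
print(removal[n // 2, n // 2], removal[0, n // 2])  # center vs. edge removal
```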

    Structural and functional properties of OSA-starches made with wide-ranging hydrolysis approaches

    Octenyl succinic anhydride modified starches (OSA-starches) are widely used as emulsifiers and stabilizers in the food industry. This study investigates the relationships between molecular structure and the emulsifying and antioxidant properties of OSA-starches with a wide range of structures, produced by hydrolysis with α-amylase, β-amylase or HCl for various hydrolysis times. Structural parameters, namely molecular size distribution, chain-length distribution, degree of branching (DB) and degree of OSA substitution (DS), were characterized using size-exclusion chromatography and ¹H nuclear magnetic resonance. These parameters were then correlated with viscosity, emulsification performance and antioxidant properties of OSA-stabilized oil emulsions to gain an improved understanding of structure-property relationships. The average chain length (DP) and DB showed positive and negative correlations, respectively, with viscosity, total antioxidant capacity (TAC), creaming extent and emulsion z-average droplet size for all hydrolyzed samples. The OSA-starches treated with α-amylase generally had the smallest average DP and largest DB, resulting in the lowest viscosity and the best droplet stability with the smallest creaming extent. The acid-hydrolyzed OSA-starch samples presented larger average DP than the enzyme-hydrolyzed samples, in agreement with their higher TAC but also their larger creaming extent. The β-amylase-hydrolyzed samples showed intermediate structural degradation and emulsifying properties compared to the OSA-starches treated with α-amylase and HCl. The structure-property correlations indicate that average chain length and DB are the two most important structural parameters determining the functional properties of OSA-modified starches. These findings will help develop improved food additives with desired functions.